122 research outputs found

    Spatial cueing deficits in dyslexia reflect generalised difficulties with attentional selection

    Traditionally, explanations of spatial cueing effects posit the operation of orienting mechanisms that act to reposition the spatial locus of attention. This process is often viewed as analogous to the movement of an attentional ‘spotlight’ across the visual field to the cued region and is thought to occur either in an exogenous or endogenous manner, depending on the nature of the cue. In line with this view, anomalous findings in dyslexic groups using paradigms involving brief peripheral cues have been interpreted as evidence for a particular deficiency with stimulus-driven, exogenous orienting. Here, we demonstrate that an exogenous orienting deficit is an untenable explanation of recent findings in which dyslexic individuals fail to derive benefit from peripheral cues indicating the location of a target in a single-fixation visual search task. In a series of experiments examining cueing effects in normal readers, we find no evidence to support the operation of an attentional orienting mechanism that is (i) fast but transient; (ii) automatic and involuntary; and (iii) preferentially driven by abrupt luminance transients. Rather, we find that the magnitude of obtained benefits is primarily determined by the informational value of the cue (irrespective of how that information is conveyed) and the accessibility of the target representation once the cue has been delivered. In addition, we show that dyslexic individuals’ difficulties with cued search do not reflect problems with detecting and localising the cue, and generalise to different cue types. These results are consistent with a general weakness of attentional selection in dyslexia.

    Adaptation to implied tilt: extensive spatial extrapolation of orientation gradients

    To extract the global structure of an image, the visual system must integrate local orientation estimates across space. Progress is being made toward understanding this integration process, but very little is known about whether the presence of structure exerts a reciprocal influence on local orientation coding. We have previously shown that adaptation to patterns containing circular or radial structure induces tilt-aftereffects (TAEs), even in locations where the adapting pattern was occluded. These spatially “remote” TAEs have novel tuning properties and behave in a manner consistent with adaptation to the local orientation implied by the circular structure (but not physically present) at a given test location. Here, by manipulating the spatial distribution of local elements in noisy circular textures, we demonstrate that remote TAEs are driven by the extrapolation of orientation structure over remarkably large regions of visual space (more than 20°). We further show that these effects are not specific to adapting stimuli with polar orientation structure, but require a gradient of orientation change across space. Our results suggest that mechanisms of visual adaptation exploit orientation gradients to predict the local pattern content of unfilled regions of space.

    Visual crowding is unaffected by adaptation-induced spatial compression

    It has recently been shown that adapting to a densely textured stimulus alters the perception of visual space, such that the distance between two points subsequently presented in the adapted region appears reduced (Hisakata, Nishida, & Johnston, 2016). We asked whether this form of adaptation-induced spatial compression alters visual crowding. To address this question, we first adapted observers to a dynamic dot texture presented within an annular region surrounding the test location. Following adaptation, observers perceived a test array comprised of multiple oriented dot dipoles as spatially compressed, resulting in an overall reduction in perceived size. We then tested to what extent this spatial compression influences crowding by measuring orientation discrimination of a single dipole flanked by randomly oriented dipoles across a range of separations. Following adaptation, we found that the magnitude of crowding was predicted by the physical, rather than perceptual, separation between centre and flanking dipoles. These findings contrast with previous studies in which crowding has been shown to increase when motion-induced position shifts act to reduce apparent separation (Dakin, Greenwood, Carlson, & Bex, 2011; Maus, Fischer, & Whitney, 2011).

    Perceptual learning reconfigures the effects of visual adaptation

    Our sensory experiences over a range of different timescales shape our perception of the environment. Two particularly striking short-term forms of plasticity with manifestly different time courses and perceptual consequences are those caused by visual adaptation and perceptual learning. Although conventionally treated as distinct forms of experience-dependent plasticity, their neural mechanisms and perceptual consequences have become increasingly blurred, raising the possibility that they might interact. To optimize our chances of finding a functionally meaningful interaction between learning and adaptation, we examined in humans the perceptual consequences of learning a fine discrimination task while adapting the neurons that carry most information for performing this task. Learning improved discriminative accuracy to a level that ultimately surpassed that in an unadapted state. This remarkable improvement came at a price: adapting directions that before learning had little effect elevated discrimination thresholds afterward. The improvements in discriminative accuracy grew quickly and surpassed unadapted levels within the first few training sessions, whereas the deterioration in discriminative accuracy had a different time course. This learned reconfiguration of adapted discriminative accuracy occurred without a concomitant change to the characteristic perceptual biases induced by adaptation, suggesting that the system was still in an adapted state. Our results point to a functionally meaningful push–pull interaction between learning and adaptation in which a gain in sensitivity in one adapted state is balanced by a loss of sensitivity in other adapted states.

    Visual perception in dyslexia is limited by sub-optimal scale selection

    Readers with dyslexia are purported to have a selective visual impairment, but the underlying nature of the deficit remains elusive. Here, we used a combination of behavioural psychophysics and biologically-motivated computational modelling to investigate whether this deficit extends to object segmentation, a process implicated in visual word form recognition. Thirty-eight adults with a wide range of reading abilities were shown random-dot displays spatially divided into horizontal segments. Adjacent segments contained either local motion signals in opposing directions or analogous static form cues depicting orthogonal orientations. Participants had to discriminate these segmented patterns from stimuli containing identical motion or form cues that were spatially intermingled. Results showed participants were unable to perform the motion or form task reliably when segment size was smaller than a spatial resolution (acuity) limit that was independent of reading skill. Coherence thresholds decreased as segment size increased, but for the motion task the rate of improvement was shallower for readers with dyslexia and the segment size where performance became asymptotic was larger. This suggests that segmentation is impaired in readers with dyslexia, but only on tasks containing motion information. We interpret these findings within a novel framework in which the mechanisms underlying scale selection are impaired in developmental dyslexia.

    Encoding of rapid time-varying information is impaired in poor readers

    A characteristic set of eye movements and fixations is made during reading, so the position of words on the retinae is constantly being updated. Effective decoding of print requires this temporal stream of visual information to be segmented or parsed into its constituent units (e.g., letters or words). Poor readers' difficulties with word recognition could arise at the point of segmenting time-varying visual information, but the mechanisms underlying this process are little understood. Here, we used random-dot displays to explore the effects of reading ability on temporal segmentation. Thirty-eight adult readers viewed test stimuli that were temporally segmented by constraining either local motions or analogous form cues to oscillate back and forth at each of a range of rates. Participants had to discriminate these segmented patterns from comparison stimuli containing the same motion and form cues but temporally intermingled. Results showed that the motion and form tasks could not be performed reliably when segment duration was shorter than a temporal resolution (acuity) limit. The acuity limits for both tasks were significantly and negatively correlated with reading scores. Importantly, the minimum segment duration needed to detect the temporally segmented stimuli was longer in relatively poor readers than in relatively good readers. This demonstrates that adult poor readers have difficulty segmenting temporally changing visual input, particularly at short segment durations. These results are consistent with evidence suggesting that precise encoding of rapid time-varying information is impaired in developmental dyslexia.

    Why is the processing of global motion impaired in adults with developmental dyslexia?

    Individuals with dyslexia are purported to have a selective dorsal stream impairment that manifests as a deficit in perceiving visual global motion relative to global form. However, the underlying nature of the visual deficit in readers with dyslexia remains unclear. It may be indicative of a difficulty with motion detection, temporal processing, or any task that necessitates integration of local visual information across multiple dimensions (i.e. both across space and over time). To disentangle these possibilities we administered four diagnostic global motion and global form tasks to a large sample of adult readers (N = 106) to characterise their perceptual abilities. Two sets of analyses were conducted. First, to investigate whether general reading ability is associated with performance on the visual tasks across the entire sample, a composite reading score was calculated and entered into a series of continuous regression analyses. Next, to investigate whether the performance of readers with dyslexia differs from that of good readers on the visual tasks, we identified a group of forty-three individuals for whom phonological decoding was specifically impaired, consistent with the dyslexic profile, and compared their performance with that of good readers who did not exhibit a phonemic deficit. Both analyses yielded a similar pattern of results. Consistent with previous research, coherence thresholds of poor readers were elevated on a random-dot global motion task and a spatially one-dimensional (1-D) global motion task, but no difference was found on a static global form task. However, our results extend those of previous studies by demonstrating that poor readers exhibited impaired performance on a temporally-defined global form task, a finding that is difficult to reconcile with the dorsal stream vulnerability hypothesis. This suggests that the visual deficit in developmental dyslexia does not reflect an impairment detecting motion per se. It is better characterised as a difficulty processing temporal information, which is exacerbated when local visual cues have to be integrated across multiple (>2) dimensions.

    Asynchrony adaptation reveals neural population code for audio-visual timing

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible: adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects.
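    The three ingredients of the population coding account above can be sketched numerically. The snippet below is a minimal illustration, not the authors' fitted model: the channel preferences, tuning width, centroid read-out, and gain-suppression profile are all assumed values chosen only to show how gain adaptation alone can shift decoded simultaneity.

```python
import numpy as np

# Assumed delay-tuned channels (ms); the paper's actual parameters differ.
PREFERRED_DELAYS = np.linspace(-200, 200, 9)
SIGMA = 80.0  # assumed Gaussian tuning width (ms)

def responses(test_delay, gains):
    """Gaussian-tuned channel responses to one audio-visual delay."""
    tuning = np.exp(-(test_delay - PREFERRED_DELAYS) ** 2 / (2 * SIGMA ** 2))
    return gains * tuning

def decode(test_delay, gains):
    """Read out perceived delay as the response-weighted mean preference."""
    r = responses(test_delay, gains)
    return np.sum(r * PREFERRED_DELAYS) / np.sum(r)

def adapted_gains(adapt_delay, strength=0.5):
    """Adaptation modifies response gain: channels tuned near the adapted
    delay are suppressed (the suppression profile is an assumption)."""
    return 1.0 - strength * np.exp(
        -(adapt_delay - PREFERRED_DELAYS) ** 2 / (2 * SIGMA ** 2))

# A physically simultaneous pair decodes to ~0 ms before adaptation,
# but is repelled away from the adapted delay afterwards.
baseline = decode(0.0, np.ones_like(PREFERRED_DELAYS))
adapted = decode(0.0, adapted_gains(adapt_delay=100.0))
```

    Under these assumptions, suppressing the gain of channels tuned near +100 ms skews the population response toward negative preferences, so a physically simultaneous stimulus is decoded as audio-leading, i.e. perceived simultaneity shifts after adaptation without any change in perceptual latency.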

    Size-induced distortions in perceptual maps of visual space

    In order to interact with our environment, the human brain constructs maps of visual space. The orderly mapping of external space across the retinal surface, termed retinotopy, is maintained at subsequent levels of visual cortical processing and underpins our capacity to make precise and reliable judgments about the relative location of objects around us. While these maps, at least in the visual system, support high precision judgments about the relative location of objects, they are prone to significant perceptual distortion. Here, we ask observers to estimate the separation of two visual stimuli, a spatial interval discrimination task. We show that large stimuli require much greater separation than small stimuli in order to be perceived as equally separated. The relationship is linear, task independent, and unrelated to the perceived position of object edges. We also show that this type of spatial distortion is not restricted to the object itself but can also be revealed by changing the spatial scale of the background, while object size remains constant. These results indicate that fundamental spatial properties, such as retinal image size or the scale at which an object is analyzed, exert a marked influence on spatial coding.

    Perceptual learning shapes multisensory causal inference via two distinct mechanisms

    To accurately represent the environment, our brains must integrate sensory signals from a common source while segregating those from independent sources. A reasonable strategy for performing this task is to restrict integration to cues that coincide in space and time. However, because multisensory signals are subject to differential transmission and processing delays, the brain must retain a degree of tolerance for temporal discrepancies. Recent research suggests that the width of this 'temporal binding window' can be reduced through perceptual learning; however, little is known about the mechanisms underlying these experience-dependent effects. Here, in separate experiments, we measure the temporal and spatial binding windows of human participants before and after training on an audiovisual temporal discrimination task. We show that training leads to two distinct effects on multisensory integration in the form of (i) a specific narrowing of the temporal binding window that does not transfer to spatial binding and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimation, whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source.
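    The two training effects above fall naturally out of a simple causal-inference computation. The following sketch uses assumed, illustrative parameters (not the paper's fits): increasing timing precision (a smaller likelihood width) narrows the temporal binding window, while lowering the common-source prior reduces the magnitude of integration at every disparity.

```python
import numpy as np

def p_common(delay_ms, sigma, prior_common, delay_range=300.0):
    """Posterior probability that audio and visual signals share a cause.

    Common cause: delays distributed normally around zero (width sigma).
    Separate causes: delays uniform over +/- delay_range.
    All parameter values here are illustrative assumptions.
    """
    like_common = np.exp(-delay_ms ** 2 / (2 * sigma ** 2)) / (
        sigma * np.sqrt(2 * np.pi))
    like_separate = 1.0 / (2 * delay_range)
    num = prior_common * like_common
    return num / (num + (1 - prior_common) * like_separate)

delays = np.linspace(-300, 300, 601)

# Before training: broad timing likelihood, strong common-source prior.
pre = p_common(delays, sigma=150.0, prior_common=0.8)
# After training: sharper timing estimates AND a weaker prior.
post = p_common(delays, sigma=80.0, prior_common=0.6)

def window_width(p, grid, criterion=0.5):
    """Width of the region where integration is favoured (p > criterion)."""
    inside = grid[p > criterion]
    return inside.max() - inside.min() if inside.size else 0.0
```

    With these assumed values, the post-training binding window (where integration is favoured) is narrower than before training, and the peak integration probability drops at all delays, mirroring effects (i) and (ii) respectively.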